

Section: Research Program

Inverse problems in Neuroimaging

Many problems in neuroimaging can be framed as forward and inverse problems. For instance, brain population imaging is concerned with the inverse problem of predicting individual information (behavior, phenotype) from neuroimaging data, while the corresponding forward problem boils down to explaining neuroimaging data with behavioral variables. Solving these problems entails the definition of two terms: a loss that quantifies the goodness of fit of the solution (does the model explain the data well enough?), and a regularization scheme that represents a prior on the expected solution of the problem. Such priors can be used to enforce properties of the solution, such as sparsity, smoothness, or piecewise constancy.

Let us detail the model used in a typical inverse problem. Let $\mathbf{X}$ be a neuroimaging dataset, written as an $(n_{\text{subjects}}, n_{\text{voxels}})$ matrix, where $n_{\text{subjects}}$ and $n_{\text{voxels}}$ are the number of subjects under study and the image size, respectively; let $\mathbf{Y}$ be a set of values that represent characteristics of interest in the observed population, written as an $(n_{\text{subjects}}, n_{\text{features}})$ matrix, where $n_{\text{features}}$ is the number of characteristics that are tested; and let $\beta$ be an array of shape $(n_{\text{voxels}}, n_{\text{features}})$ that represents a set of pattern-specific maps. As a first step, we may consider the columns $\mathbf{Y}_1, \dots, \mathbf{Y}_{n_{\text{features}}}$ of $\mathbf{Y}$ independently, yielding $n_{\text{features}}$ problems to be solved in parallel:

$$\mathbf{Y}_i = \mathbf{X}\,\beta_i + \epsilon_i, \qquad \forall i \in \{1, \dots, n_{\text{features}}\},$$

where the vector $\beta_i$ is the $i$-th column of $\beta$. As the problem is clearly ill-posed, it is naturally handled in a regularized regression framework:

$$\hat{\beta}_i = \operatorname{argmin}_{\beta_i} \, \|\mathbf{Y}_i - \mathbf{X}\beta_i\|^2 + \Psi(\beta_i),$$

where $\Psi$ is an adequate penalization used to regularize the solution:

$$\Psi(\beta; \lambda_1, \lambda_2, \eta_1, \eta_2) = \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2 + \eta_1 \|\nabla\beta\|_{2,1} + \eta_2 \|\nabla\beta\|_{2,2},$$

with $\lambda_1, \lambda_2, \eta_1, \eta_2 \geq 0$ (this formulation particularly highlights the fact that convex regularizers are norms or quasi-norms). Here $\nabla$ denotes the spatial gradient, so that the $\|\nabla\beta\|_{2,1}$ term corresponds to Total Variation. In general, only one or two of these constraints are considered (hence enforced with a non-zero coefficient): $\lambda_1 > 0$ alone yields Lasso regression, $\lambda_2 > 0$ alone ridge regression, $\lambda_1, \lambda_2 > 0$ the elastic net, and $\eta_1 > 0$ Total Variation regularization.
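The first three of these special cases map directly onto off-the-shelf estimators. As a minimal sketch (assuming scikit-learn, with arbitrary penalty weights that would in practice be tuned), one may write:

    # Map the classical special cases of the penalty to scikit-learn estimators.
    # The alpha values below are arbitrary; they are normally set by cross-validation.
    from sklearn.linear_model import ElasticNet, Lasso, Ridge

    models = {
        "lasso":       Lasso(alpha=0.1),                     # lambda_1 > 0
        "ridge":       Ridge(alpha=1.0),                     # lambda_2 > 0
        "elastic_net": ElasticNet(alpha=0.1, l1_ratio=0.5),  # lambda_1, lambda_2 > 0
    }

    # Each of the n_features problems is solved independently:
    # models["lasso"].fit(X, Y[:, i]) estimates beta_i for the i-th characteristic.

Total Variation, by contrast, has no generic scikit-learn estimator and requires dedicated solvers (typically proximal methods).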

Note that, while the qualitative aspects of the solutions are very different, the predictive power of these models is often very close.
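To make this point concrete, here is a sketch on synthetic data (all sizes and penalty weights are hypothetical): ridge and Lasso recover maps of very different sparsity, and their predictive scores can then be compared on held-out data.

    # Synthetic comparison: qualitatively different maps, comparable prediction.
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    n_subjects, n_voxels = 200, 500              # hypothetical sizes
    X = rng.randn(n_subjects, n_voxels)
    beta = np.zeros(n_voxels)
    beta[:10] = 1.0                              # sparse, well-localized true map
    y = X @ beta + rng.randn(n_subjects)         # forward model: Y_i = X beta_i + noise

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    for model in (Ridge(alpha=10.0), Lasso(alpha=0.1)):
        model.fit(X_train, y_train)
        print(type(model).__name__,
              "non-zero coefficients:", int(np.sum(model.coef_ != 0)),
              "test R^2: %.2f" % model.score(X_test, y_test))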

Figure 1. Example of the regularization of a brain map with Total Variation in an inverse problem. The problem here is to predict the spatial scale of an object presented as a stimulus, given functional neuroimaging data acquired during the presentation of an image. Learning and test are performed across individuals. Unlike other approaches, Total Variation regularization yields a sparse and well-localized solution that also enjoys high predictive accuracy.
[Image: IMG/inter_sizes_alpha1.png]

The performance of the predictive model can simply be evaluated as the amount of variance in $\mathbf{Y}_i$ fitted by the model, for each $i \in \{1, \dots, n_{\text{features}}\}$. This can be computed through cross-validation, by learning $\hat{\beta}_i$ on one part of the dataset, and then estimating $\|\mathbf{Y}_i - \mathbf{X}\hat{\beta}_i\|^2$ on the remainder of the dataset.
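In scikit-learn terms this is the cross-validated $R^2$ (explained variance) score. A minimal sketch, assuming a $\mathbf{Y}$ matrix of shape $(n_{\text{subjects}}, n_{\text{features}})$ as defined above, with an arbitrary choice of estimator and fold count:

    # Cross-validated explained variance: one score per characteristic Y_i.
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold, cross_val_score

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = [
        cross_val_score(Ridge(alpha=10.0), X, Y[:, i], scoring="r2", cv=cv).mean()
        for i in range(Y.shape[1])
    ]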

This framework is easily extended by considering